An Algorithm for Unconstrained Quadratically Penalized Convex Optimization


Similar Articles

An Algorithm for Unconstrained Quadratically Penalized Convex Optimization

A descent algorithm, “Quasi-Quadratic Minimization with Memory” (QQMM), is proposed for unconstrained minimization of the sum, F, of a non-negative convex function, V, and a quadratic form. Such problems come up in regularized estimation in machine learning and statistics. In addition to values of F, QQMM requires the (sub)gradient of V. Two features of QQMM help keep low the number of eval...
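The problem structure described above (F equal to a convex, non-negative V plus a quadratic penalty, with only F-values and a subgradient of V available) can be illustrated with a minimal sketch. The code below is plain subgradient descent, not the QQMM algorithm itself, and the hinge-loss choice of V is an illustrative assumption.

```python
import numpy as np

def penalized_subgradient(V, subgrad_V, x0, lam=1.0, steps=500):
    """Subgradient descent on F(x) = V(x) + 0.5 * lam * ||x||^2 (illustration only)."""
    x = np.asarray(x0, dtype=float)
    best_x, best_F = x.copy(), V(x) + 0.5 * lam * (x @ x)
    for k in range(1, steps + 1):
        g = subgrad_V(x) + lam * x       # subgradient of F at x
        x = x - g / (lam * k)            # diminishing step, valid for strongly convex F
        F = V(x) + 0.5 * lam * (x @ x)
        if F < best_F:                   # keep the best iterate seen so far
            best_x, best_F = x.copy(), F
    return best_x, best_F

# Hypothetical example: V is a sum of hinge losses (convex and non-negative).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
V = lambda x: float(np.maximum(0.0, 1.0 - A @ x).sum())
subgrad_V = lambda x: -(A * ((1.0 - A @ x) > 0)[:, None]).sum(axis=0)
x_best, F_best = penalized_subgradient(V, subgrad_V, np.zeros(5))
print(F_best)
```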


An Efficient Conjugate Gradient Algorithm for Unconstrained Optimization Problems

In this paper, an efficient conjugate gradient method for unconstrained optimization is introduced. The method's parameters are obtained by solving an optimization problem and using a variant of the modified secant condition. The new conjugate gradient parameter benefits from function information as well as gradient information at each iteration. The proposed method has global convergence und...
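For context, a standard nonlinear conjugate gradient skeleton is sketched below; it shows where the conjugacy parameter enters, but it uses the classical Polak-Ribière+ formula and an Armijo backtracking line search as stand-ins, not the modified-secant parameter proposed in that paper.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear CG with the Polak-Ribiere+ parameter and Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx, gTd = 1.0, f(x), g @ d
        for _ in range(50):              # Armijo backtracking line search
            if f(x + t * d) <= fx + 1e-4 * t * gTd:
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ conjugacy parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example on a strictly convex quadratic.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(nonlinear_cg(lambda x: 0.5 * x @ Q @ x - b @ x, lambda x: Q @ x - b, np.zeros(2)))
```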


A Penalized Quadratic Convex Reformulation Method for Random Quadratic Unconstrained Binary Optimization

The Quadratic Convex Reformulation (QCR) method is used to solve quadratic unconstrained binary optimization problems. In this method, a semidefinite relaxation is used to reformulate the problem into a convex binary quadratic program, which is then solved using mixed-integer quadratic programming solvers. We extend this method to random quadratic unconstrained binary optimization problems. We develop a Penal...
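The basic convexification idea that QCR builds on can be sketched as follows. This shows only the diagonal-perturbation identity x_i^2 = x_i on binary variables, not the paper's semidefinite-programming-based choice of perturbation or its extension to random instances.

```python
import numpy as np

def convexify_qubo(Q, c):
    """Return (Q_tilde, c_tilde): same objective values on {0,1}^n, but Q_tilde is PSD."""
    Q = np.asarray(Q, dtype=float)
    # Adding mu * (x_i^2 - x_i) changes nothing on binary points, since x_i^2 = x_i;
    # choosing mu >= -lambda_min(Q) makes the quadratic term convex.
    mu = max(0.0, -np.linalg.eigvalsh(Q).min())
    Q_tilde = Q + mu * np.eye(Q.shape[0])
    c_tilde = np.asarray(c, dtype=float) - mu
    return Q_tilde, c_tilde

Q = np.array([[0.0, 2.0], [2.0, -1.0]])   # indefinite quadratic term
c = np.array([1.0, -0.5])
Qt, ct = convexify_qubo(Q, c)
x = np.array([1.0, 1.0])                  # any binary point gives the same value
print(x @ Q @ x + c @ x, x @ Qt @ x + ct @ x)
```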


A new hybrid conjugate gradient algorithm for unconstrained optimization

In this paper, a new hybrid conjugate gradient algorithm is proposed for solving unconstrained optimization problems. The new method generates sufficient descent directions independently of the line search. Moreover, the global convergence of the proposed method is proved under the Wolfe line search. Numerical experiments are also presented to show the efficiency of the proposed algorithm, espe...
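A small helper illustrating the Wolfe conditions referenced above (the convergence framework, not the hybrid method itself); the constants c1 and c2 and the 1-D example are conventional assumptions for illustration.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """Check sufficient decrease (Armijo) and curvature conditions at step length t."""
    fx, gx = f(x), grad(x)
    armijo = f(x + t * d) <= fx + c1 * t * (gx @ d)
    curvature = grad(x + t * d) @ d >= c2 * (gx @ d)
    return armijo and curvature

# Example: f(x) = x^2 at x = 2 with the steepest-descent direction.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x, d = np.array([2.0]), -np.array([4.0])
print(satisfies_wolfe(f, grad, x, d, 0.25))   # step to the exact minimizer -> True
```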


Regularized Newton method for unconstrained convex optimization

We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover, the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
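A generic regularized Newton step can be sketched as below: shifting the Hessian by mu * I keeps the linear solve well posed even where the Hessian is singular. This is not necessarily the rnm of the cited paper; the fixed mu and the full (undamped) step are assumptions for illustration.

```python
import numpy as np

def regularized_newton(grad, hess, x0, mu=1e-3, tol=1e-10, max_iter=100):
    """Newton iteration with a mu * I shift so the step is always well defined."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x) + mu * np.eye(x.size)    # regularized Hessian
        x = x - np.linalg.solve(H, g)        # full regularized Newton step
    return x

# Example: f(x) = 0.25 * ||x||^4, whose Hessian is singular at the minimizer x = 0.
grad = lambda x: (x @ x) * x
hess = lambda x: (x @ x) * np.eye(x.size) + 2.0 * np.outer(x, x)
print(regularized_newton(grad, hess, np.array([1.0, 1.0])))
```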


Journal

Journal title: Communications in Statistics - Simulation and Computation

Year: 2011

ISSN: 0361-0918, 1532-4141

DOI: 10.1080/03610918.2011.560734